Halifax
- North America > United States > Virginia (0.04)
- Europe > France (0.04)
- North America > Canada > Nova Scotia > Halifax Regional Municipality > Halifax (0.04)
- (4 more...)
- Overview (0.67)
- Research Report > New Finding (0.67)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > Middle East > Jordan (0.04)
- (11 more...)
- Overview (0.67)
- Research Report > New Finding (0.46)
- Oceania > Australia > New South Wales > Sydney (0.14)
- North America > Canada > Quebec > Montreal (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (14 more...)
- Research Report > New Finding (0.46)
- Research Report > Experimental Study (0.46)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > Canada > British Columbia > Vancouver (0.04)
- (13 more...)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > Greenland (0.04)
- (10 more...)
- Law (1.00)
- Government (0.93)
Ubisoft cancels projects and announces restructure in fight to stay competitive
Ubisoft, the video games publisher behind the Assassin's Creed series, has cancelled projects and announced a restructuring that will close several studios after several years of weak results and disappointing sales. The company is abandoning development of six titles - including a highly anticipated remake of Prince of Persia: The Sands of Time, a series that dates back to 1989 and received an ill-fated Hollywood adaptation in 2010 - and delaying a further seven as it fights to stay competitive in the global gaming market. The sweeping reorganisation sent its shares to their lowest level in more than a decade on Thursday. Studios in Halifax, Canada and Stockholm are being closed, with restructuring to follow in other countries, it said.
- North America > Canada > Nova Scotia > Halifax Regional Municipality > Halifax (0.25)
- Europe > Sweden > Stockholm > Stockholm (0.25)
- North America > United States (0.16)
- (2 more...)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Games (1.00)
Ubisoft cancels six games including Prince of Persia and closes studios
Ubisoft has cancelled six video games - including its long-awaited Prince of Persia: The Sands of Time remake - as part of a major reset of its operations. The French developer and publisher, known for popular games such as Assassin's Creed, Far Cry and Just Dance, has closed two studios and delayed seven titles as part of its changes. Ubisoft boss Yves Guillemot said the move would create the conditions for a return to sustainable growth. The firm's shares plunged by 33% on Thursday morning following the announcement. The move comes at a time when studios are increasingly turning to video game remakes and remasters, with new versions of Super Mario Galaxy, Oblivion and Metal Gear Solid 3 proving popular in 2025.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (16 more...)
Understanding Syntactic Generalization in Structure-inducing Language Models
Arps, David, Sajjad, Hassan, Kallmeyer, Laura
Structure-inducing Language Models (SiLMs) are trained on a self-supervised language modeling task and induce a hierarchical sentence representation as a byproduct of processing an input. SiLMs couple strong syntactic generalization behavior with competitive performance on various NLP tasks, but many of their basic properties remain underexplored. In this work, we train three different SiLM architectures from scratch: Structformer (Shen et al., 2021), UDGN (Shen et al., 2022), and GPST (Hu et al., 2024b). We train these architectures both on natural language corpora (English, German, and Chinese) and on synthetic bracketing expressions. The models are then evaluated with respect to (i) properties of the induced syntactic representations, (ii) performance on grammaticality judgment tasks, and (iii) training dynamics. We find that none of the three architectures dominates across all evaluation metrics. However, there are significant differences, in particular with respect to the induced syntactic representations. The Generative Pretrained Structured Transformer (GPST; Hu et al., 2024b) performs most consistently across evaluation settings, and outperforms the other models on long-distance dependencies in bracketing expressions. Furthermore, our study shows that small models trained on large amounts of synthetic data provide a useful testbed for evaluating basic model properties.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Austria > Vienna (0.14)
- (25 more...)
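The synthetic bracketing expressions mentioned in the abstract can be illustrated with a small generator for balanced (Dyck-style) strings. The sampling scheme below (nesting probability, binary branching, two-terminal vocabulary) is a hypothetical stand-in for whatever scheme the paper actually uses:

```python
import random

def gen_bracketing(max_depth=4, vocab=("a", "b"), p_open=0.6, rng=None):
    """Generate one synthetic bracketing expression (a Dyck-style string).

    Illustrative assumption: binary branching with open-probability p_open;
    the paper's actual corpus-generation procedure may differ.
    """
    rng = rng or random.Random()
    out = []

    def expand(depth):
        # Open a nested pair with probability p_open, else emit a terminal.
        if depth < max_depth and rng.random() < p_open:
            out.append("(")
            expand(depth + 1)
            expand(depth + 1)
            out.append(")")
        else:
            out.append(rng.choice(vocab))

    expand(0)
    return " ".join(out)

# A tiny corpus; brackets are balanced by construction.
corpus = [gen_bracketing(rng=random.Random(i)) for i in range(3)]
```

Because nesting depth is bounded and branching is binary, long-distance dependencies between an open bracket and its matching close arise naturally, which is what makes such corpora a useful probe for the models above.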
Exploring Test-time Scaling via Prediction Merging on Large-Scale Recommendation
Lyu, Fuyuan, Chen, Zhentai, Jiang, Jingyan, Li, Lingjie, Tang, Xing, He, Xiuqiang, Liu, Xue
Inspired by the success of language models (LMs), scaling up deep learning recommendation systems (DLRS) has become a recent trend in the community. Previous methods all scale up the model parameters during training time. However, how to efficiently utilize and scale up computational resources at test time remains underexplored, even though test-time scaling has proved to be a compute-efficient approach that brings orthogonal improvements in LM domains. The key to applying test-time scaling to DLRS lies in effectively generating diverse yet meaningful outputs for the same instance. We propose two ways to do so: one exploits the heterogeneity of different model architectures; the other exploits the randomness of model initialization under a homogeneous architecture. The evaluation is conducted across eight models, including both classic and SOTA models, on three benchmarks, and the results demonstrate the effectiveness of both solutions. We further show that under the same inference budget, test-time scaling can outperform parameter scaling. When deployed online, our test-time scaling can also be seamlessly accelerated by adding parallel servers, without affecting inference time on the user side. Code is available.
- Europe > Austria > Vienna (0.14)
- North America > Canada > Quebec > Montreal (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- (17 more...)
- Research Report > Experimental Study (0.46)
- Research Report > New Finding (0.46)
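The second approach above — merging predictions from homogeneous models that differ only in their initialization seed — can be sketched as follows. The tiny scorer and all names here are illustrative assumptions, not the paper's actual models or merging rule:

```python
import numpy as np

def init_model(seed, dim=8):
    # Hypothetical two-layer scorer; weights differ only by init seed.
    rng = np.random.default_rng(seed)
    return {"w1": rng.normal(size=(dim, dim)), "w2": rng.normal(size=dim)}

def predict(model, x):
    # Sigmoid click-probability from a tiny MLP-style scorer.
    h = np.tanh(x @ model["w1"])
    return 1.0 / (1.0 + np.exp(-(h @ model["w2"])))

def merged_prediction(models, x):
    # Test-time scaling: merge by averaging per-model probabilities.
    return float(np.mean([predict(m, x) for m in models]))

models = [init_model(seed) for seed in range(4)]
x = np.ones(8)  # a stand-in feature vector for one instance
p = merged_prediction(models, x)
```

Because each model's forward pass is independent, the per-model predictions can run on separate servers in parallel, which is why adding servers speeds up the merge without changing user-side latency.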
Distributed Dynamic Associative Memory via Online Convex Optimization
Wang, Bowen, Zecchin, Matteo, Simeone, Osvaldo
An associative memory (AM) enables cue-response recall, and it has recently been recognized as a key mechanism underlying modern neural architectures such as Transformers. In this work, we introduce the concept of distributed dynamic associative memory (DDAM), which extends classical AM to settings with multiple agents and time-varying data streams. In DDAM, each agent maintains a local AM that must not only store its own associations but also selectively memorize information from other agents based on a specified interest matrix. To address this problem, we propose a novel tree-based distributed online gradient descent algorithm, termed DDAM-TOGD, which enables each agent to update its memory on the fly via inter-agent communication over designated routing trees. We derive rigorous performance guarantees for DDAM-TOGD, proving sublinear static regret in stationary environments and a path-length dependent dynamic regret bound in non-stationary environments. These theoretical results provide insights into how communication delays and network structure impact performance. Building on the regret analysis, we further introduce a combinatorial tree design strategy that optimizes the routing trees to minimize communication delays, thereby improving regret bounds. Numerical experiments demonstrate that the proposed DDAM-TOGD framework achieves superior accuracy and robustness compared to representative online learning baselines such as consensus-based distributed optimization, confirming the benefits of the proposed approach in dynamic, distributed environments.
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > California (0.04)
- North America > Canada > Nova Scotia > Halifax Regional Municipality > Halifax (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
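The local associative-memory update at the heart of this setup can be sketched as plain online gradient descent on the squared recall error. The single-agent sketch below omits the routing trees and inter-agent communication of DDAM-TOGD, and all names are illustrative:

```python
import numpy as np

def ogd_memory_update(W, key, value, lr=0.1):
    """One online-gradient step on 0.5 * ||W @ key - value||^2.

    Illustrative single-agent version of the local memory update; the
    distributed routing-tree machinery is omitted for brevity.
    """
    err = W @ key - value       # current recall error for this cue
    grad = np.outer(err, key)   # gradient of the squared recall error w.r.t. W
    return W - lr * grad

rng = np.random.default_rng(0)
key = rng.normal(size=4)
key /= np.linalg.norm(key)      # unit-norm cue for stable convergence
value = rng.normal(size=4)

W = np.zeros((4, 4))
for _ in range(200):
    W = ogd_memory_update(W, key, value)
recalled = W @ key              # converges toward the stored value
```

With a unit-norm key, each step contracts the recall error by the factor (1 - lr), which mirrors in miniature how the regret analysis tracks accumulated recall error over a data stream.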